10,387 research outputs found
On The Incompatibility of Faith and Intellectual Humility
Although the relationship between faith and intellectual humility has yet to be specifically addressed in the philosophical literature, there are reasons to believe that they are at least in some sense incompatible, especially judging from pre-theoretical intuitions. In this paper I attempt to specify and explicate this incompatibility, which is found in the specific conflicting epistemic attitudes they each invite. I first suggest general definitions of both faith and intellectual humility (understood as intellectual virtues), building on current proposals in the literature, in an attempt to portray both in as broad and uncontroversial a manner as feasible. I then argue that this prima facie incompatibility aligns with these understandings of faith and intellectual humility, and illustrate how the incompatibility is even clearer on one recent theory. I close by considering one avenue of response for those who want to maintain that, while conflicting in these ways, intellectual humility and faith can be simultaneously virtuous.
The date of the Decalogue
Thesis (M.A.)--Boston University.
The value of remote sensing techniques in supporting effective extrapolation across multiple marine spatial scales
The reporting of ecological phenomena and environmental status routinely requires point observations, collected with traditional sampling approaches, to be extrapolated to larger reporting scales. This process encompasses difficulties that can quickly entrain significant errors. Remote sensing techniques offer insights and exceptional spatial coverage for observing the marine environment. This review provides guidance on (i) the structures and discontinuities inherent within the extrapolative process, (ii) how to extrapolate effectively across multiple spatial scales, and (iii) remote sensing techniques and data sets that can facilitate this process. This evaluation illustrates that remote sensing techniques are a critical component in extrapolation and are likely to underpin the production of high-quality assessments of ecological phenomena and the regional reporting of environmental status. Ultimately, it is hoped that this guidance will aid the production of robust and consistent extrapolations that also make full use of the techniques and data sets that expedite this process.
Exploiting Data Representation for Fault Tolerance
We explore the link between data representation and soft errors in dot products. We present an analytic model for the absolute error introduced should a soft error corrupt a bit in an IEEE-754 floating-point number. We show how this finding relates to the fundamental linear algebra concepts of normalization and matrix equilibration. We present a case study illustrating that the probability of experiencing a large error in a dot product is minimized when both vectors are normalized. Furthermore, we show that when the data are normalized, the absolute error is either less than one or very large, which allows us to detect large errors. We demonstrate how this finding can be used by instrumenting the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase, and show that when scaling is used the absolute error can be bounded above by one.
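As an illustrative sketch (not the paper's model or code), flipping a single bit of an IEEE-754 double shows the two error regimes the abstract describes for normalized data: a low-order mantissa flip perturbs a dot product by far less than one, while a high exponent flip produces a very large error that is easy to detect. The helper names below are my own for illustration.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Return x with bit `bit` (0..63) of its IEEE-754 binary64 encoding flipped."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return y

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Two vectors normalized to unit Euclidean norm.
u = [0.6, 0.8]
v = [0.8, 0.6]

clean = dot(u, v)
# Flip a low-order mantissa bit of one operand: the error stays tiny (<< 1).
small = abs(dot([flip_bit(u[0], 2), u[1]], v) - clean)
# Flip a high exponent bit: the error is enormous, so it stands out.
big = abs(dot([flip_bit(u[0], 62), u[1]], v) - clean)
```

With `u` and `v` normalized, any error of magnitude one or more can be flagged as a fault, which is the detection idea the abstract applies inside GMRES.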
Comparison of Heart Rate Intensity in Practice, Conditioning, and Games in NCAA Division I Women Basketball Players
Background: An athlete’s heart rate (HR) is an important variable in quantifying the intensity of exercise. Workouts that increase HR are an important stimulus for training adaptations and conditioning. At other times, workouts that do not overly stress the HR may be desired to allow for recovery. The principle of specificity emphasizes that athletes should train specific to the way they will need to perform in competition. Because of this, monitoring HR during training and competition can be a useful tool. While exercise intensity in endurance sports has been previously investigated, less is known regarding the HR response in team sports, particularly women’s basketball.
Purpose: To compare the average HR response to basketball training and competition in: 1) open gym 5-on-5 scrimmages, 2) actual basketball games against other opponents, and 3) conditioning sessions.
Methods: An NCAA Division I women’s basketball team wore heart rate monitors during open gym scrimmages, actual games, and conditioning sessions. For the open gym sessions, the team scrimmaged against each other 5v5 for ~90 minutes, and the average HR over 4 open gym sessions was determined. For the actual games against other opponents, the team’s average HR response was averaged over 3 games. The conditioning sessions consisted of repeated, intermittent short sprint efforts over the course of 30-60 minutes, and the average HR over 7 conditioning sessions was calculated. The collected data were entered into a spreadsheet and used to compute the team’s averages for the scrimmages, games, and conditioning sessions.
Results: During open gym scrimmages and conditioning sessions, the team’s average heart rate was higher than during games; the games produced the lowest average HR of the three conditions.
Evaluating the Impact of SDC on the GMRES Iterative Solver
Increasing parallelism and transistor density, along with increasingly tighter energy and peak power constraints, may force exposure of occasionally incorrect computation or storage to application codes. Silent data corruption (SDC) will likely be infrequent, yet a single SDC suffices to make numerical algorithms such as iterative linear solvers cease progressing towards the correct answer. We therefore focus on the resilience of the iterative linear solver GMRES to a single transient SDC. We derive inexpensive checks to detect the effects of an SDC in GMRES that work for a more general SDC model than an assumed bit flip. Our experiments show that when GMRES is used as the inner solver of an inner-outer iteration, it can "run through" SDC of almost any magnitude in the computationally intensive orthogonalization phase. That is, it obtains the right answer using faulty data, without any roll back required. Those SDCs it cannot run through are caught by our detection scheme.
Impact of Heart Rate Intensity on Shooting Accuracy during Games in NCAA Division I Women Basketball Players
Shooting accuracy in basketball is key to winning games. While many factors determine whether a team makes or misses its shots, the intensity of play is likely a contributing factor. A player who has played the majority of the game will likely have a higher, more elevated heart rate (HR), and depending on the athlete, this could impact shooting accuracy. The relationship between HR intensity and shooting accuracy has not previously been examined in a real game setting. Therefore, we set out to determine the impact of heart rate intensity on shooting accuracy in a game setting.
Purpose: The purpose of this study was to determine the impact of heart rate intensity on shooting accuracy in a game setting in NCAA Division I female basketball players.
Methods: We examined the team stats for shooting accuracy from overall attempts, three-point attempts, and free throws during five games. During games, players wore HR monitors that transmitted to a mobile app that displayed their HR in real time. Every time a shot was attempted, we recorded the type of shot, where on the floor it was taken, whether it was made or missed, and the HR zone the athlete was in at the time. The HR zones compared were 1) 70-80% of HR max, 2) 80-90% of HR max, and 3) 90-100% of HR max. These data were entered into a spreadsheet to calculate the average team shooting percentage across the three HR zones for overall shooting, free throws, and 3-pointers.
Results: As indicated in the table, the team shooting percentage was highest for all types of shots when players were at the lowest HR intensity. Shooting accuracy declined at higher HR intensities.
Resilience in Numerical Methods: A Position on Fault Models and Methodologies
Future extreme-scale computer systems may expose silent data corruption (SDC) to applications in order to save energy or increase performance. However, resilience research struggles to come up with useful abstract programming models for reasoning about SDC. Existing work randomly flips bits in running applications, but this only shows average-case behavior for a low-level, artificial hardware model. Algorithm developers need to understand worst-case behavior with the higher-level data types they actually use in order to make their algorithms more resilient. Also, we know so little about how SDC may manifest in future hardware that it seems premature to draw conclusions about the average case. We argue instead that numerical algorithms can benefit from a numerical unreliability fault model, where faults manifest as unbounded perturbations to floating-point data. Algorithms can use inexpensive "sanity" checks that bound or exclude error in the results of computations. Given a selective reliability programming model that requires reliability only when and where needed, such checks can make algorithms reliable despite unbounded faults. Sanity checks, and in general a healthy skepticism about the correctness of subroutines, are wise even if hardware is perfectly reliable.

Comment: Position Paper
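A minimal sketch of such a sanity check (my own generic illustration, not the paper's checks): for unit vectors, the Cauchy-Schwarz inequality bounds the dot product by one, so any larger magnitude, or a non-finite value, can only come from a fault, however unbounded the underlying perturbation.

```python
import math

def checked_unit_dot(u, v, tol: float = 1e-8) -> float:
    """Dot product of two vectors assumed to have unit Euclidean norm,
    with a cheap sanity check: by Cauchy-Schwarz, |u . v| <= 1, so a
    larger magnitude (or NaN/inf) signals a fault in the computation,
    regardless of the fault's size."""
    d = sum(a * b for a, b in zip(u, v))
    if not math.isfinite(d) or abs(d) > 1.0 + tol:
        raise ArithmeticError("sanity check failed: possible silent data corruption")
    return d
```

The check costs a single comparison per dot product, which is the kind of inexpensive invariant the abstract argues algorithms should verify even on nominally reliable hardware.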